Romano Law
March 23, 2026 | Copyright | Entertainment | Technology | Trademark

Are AI Deepfakes Illegal? A Legal Guide to Deepfake Laws, Liability, and Enforcement


As AI tools become more advanced and accessible, the line between authentic and fabricated content continues to blur. Legislatures and courts are rapidly adapting, but legal frameworks remain fragmented. Businesses and creators must stay proactive in understanding evolving laws, platform policies, and contractual protections.

What is a deepfake?

A deepfake is AI-generated or AI-manipulated content, such as video, audio, or images, that realistically replicates a person’s face, voice, or identity. These tools can create convincing but fabricated content, often making it appear that an individual said or did something they never actually did.

As of 2026, deepfakes can trigger claims under right of publicity, defamation, fraud, copyright infringement, and increasingly, specific anti-deepfake statutes. The federal TAKE IT DOWN Act and laws in states like California, Texas, and New York now provide direct legal remedies for victims of non-consensual AI-generated content.

Right of Publicity and AI Deepfakes

The “right of publicity” allows individuals to control how their name, image, and likeness are used for commercial purposes. Celebrities, and increasingly influencers and other public figures, can generate significant income through endorsements, making unauthorized use particularly harmful.

There is no uniform federal law governing publicity rights, and protections vary significantly by state. Even so, using AI to replicate a person’s likeness in advertising without permission will often violate these rights.

What Is Happening to Celebrities?

Unauthorized AI-generated ads featuring celebrities continue to rise, but enforcement has also increased.

In 2023, Scarlett Johansson appeared in an AI-generated advertisement without her consent, prompting swift legal action and removal of the content. Similar incidents involved Tom Hanks and Gayle King, each publicly warning audiences about the prevalence of unauthorized AI content.

Since then, platforms such as YouTube, Meta, and TikTok have introduced stricter policies targeting deceptive AI content, including labeling requirements and removal procedures for non-consensual deepfakes. At the same time, lawsuits involving AI-generated likenesses have expanded, particularly in advertising and entertainment contexts.

Not all uses are unauthorized. Some public figures, including actors and voice performers, have entered agreements licensing their likenesses for AI use. These agreements highlight the importance of consent and contract clarity in avoiding liability.

Liability for Deepfake Content

Liability for the unauthorized use of AI content may extend to multiple parties, including the creator of the deepfake, the company distributing it, and potentially the platforms hosting it. While platforms often rely on intermediary liability protections, increasing regulatory pressure and evolving platform policies are narrowing those protections in certain contexts.

Deepfake Damages

Victims of unauthorized deepfakes may seek damages for lost income, reputational harm, emotional distress, and unjust enrichment. In some cases, statutory damages or injunctive relief, such as removal of the content, may also be available under applicable laws.

How Do Entertainment Industry Contracts Address AI Likeness Rights?

Contracts for actors, musicians, and influencers are increasingly addressing AI usage rights. These agreements may define whether a party can replicate a person’s likeness, voice, or performance using AI, and under what conditions. Clear contractual language is essential to avoid disputes over ownership and consent.

Key Legal Risks of AI-Generated Content

  • Right of Publicity Violations: Using a person’s likeness without consent, particularly in commercial contexts, can result in significant liability under state publicity laws.
  • Copyright Infringement: AI-generated content that incorporates or is trained on copyrighted material may infringe the rights of original creators. Unauthorized use of film clips, images, or other protected works can create additional exposure.
  • Trademark Issues: If AI-generated content promotes or references branded products without authorization, it may infringe trademark rights or create consumer confusion.
  • Fraud and Misrepresentation: Deepfakes used to mislead consumers, such as fake endorsements, can expose creators and distributors to fraud claims and regulatory enforcement.

Conclusion

AI-generated content presents significant legal opportunities and risks. Unauthorized deepfakes can trigger multiple forms of liability, particularly when used in advertising or commercial contexts.

Contracts, consent, and compliance are now essential in navigating AI-related content creation. Those working with AI-generated media should consult experienced legal counsel to ensure proper rights are secured and risks are minimized.

Contact Romano Law to learn how we can help protect your rights and navigate the evolving legal landscape of AI-generated content.

Contributions to this blog by Kennedy McKinney

Photo by Maxim Hopman on Unsplash